ci: Add python unit test workflows #954
Conversation
I suggest that, in the short term after this PR is merged, all PRs must pass the unit tests in the macOS and Linux environments. Later, once the problems in the Windows environment are fixed, all PRs must pass all unit tests.
Meanwhile, we have a datasource module that relies on a third-party database, so it should not run as part of the unit-test CI.
These tests will migrate to path
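One way to fence off the datasource tests that depend on an external database is a standard skip marker. Below is a minimal stdlib-only sketch; the driver name `pymysql` and the test names are illustrative assumptions, not the project's actual code:

```python
# Sketch: skip datasource tests that need a real third-party database
# so they do not fail the unit-test CI job.
# "pymysql" and the test names are hypothetical placeholders.
import importlib.util
import unittest

# True only when a database driver is actually installed in the CI image.
HAS_DB_DRIVER = importlib.util.find_spec("pymysql") is not None


class TestDatasource(unittest.TestCase):
    @unittest.skipUnless(HAS_DB_DRIVER, "requires a third-party database driver")
    def test_connect(self):
        # Would exercise a real database connection; skipped in plain CI.
        pass

    def test_query_builder_logic(self):
        # Pure in-process logic: always runs in the unit-test workflow.
        self.assertEqual(" ".join(["SELECT", "1"]), "SELECT 1")
```

With this pattern the same test files can stay in place while they wait to be migrated; CI simply reports the database-backed cases as skipped.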
Very much agree with that. |
r+
r+
r+
@csunny @Aries-ckt Please review. Maybe we can use Codecov to automatically add test report information to PR comments in the future.
ci: Temporarily remove the windows environment
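As an interim alternative to dropping Windows from CI entirely, only the known-broken cases could be skipped with a platform guard. A hedged sketch (the test content is illustrative, not taken from the project):

```python
# Sketch: skip only the tests known to fail on Windows, instead of
# removing the whole OS from the CI matrix. Test body is hypothetical.
import sys
import unittest


class TestPlatformSensitive(unittest.TestCase):
    @unittest.skipIf(sys.platform == "win32", "known failure on Windows")
    def test_posix_style_paths(self):
        # Passes on macOS/Linux; skipped (not failed) on Windows runners.
        self.assertEqual("/".join(["dbgpt", "core"]), "dbgpt/core")
```

This keeps Windows runs green while still exercising everything that does work there, until the underlying failures are fixed.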
r+
r+
r+
Squashed commit history (author: penghou.ho <penghou.ho@techronex.com>, parent 3f70da4):

- Add requirements.txt
- Create only necessary tables
- Remove reference info in chat completion result
- Set disable_alembic_upgrade to True
- Comment _initialize_awel
- Comment mount_static_files
- Fix torch.has_mps deprecated
- Add API key
- Comment unused API endpoints
- Install rocksdict to enable DiskCacheStorage
- Fix the chat_knowledge missing in chat_mode
- Update requirements.txt
- Re-enable awel and add api key check for simple_rag_example DAG
- Merge main bdf9442
- Disable disable_alembic_upgrade
- Compile bitsandbytes from source and enable verbose
- Tune the prompt of chat knowledge to only refer to context
- Add the web static files and uncomment previous unused APIs
- Add back routers
- Enable KNOWLEDGE_CHAT_SHOW_RELATIONS
- Display relation based on CFG.KNOWLEDGE_CHAT_SHOW_RELATIONS
- Stop adding reference to last_output if KNOWLEDGE_CHAT_SHOW_RELATIONS is false
- Fix always no reference
- Improve Chinese prompts
- Update requirements.txt
- Improve prompt
- Improve prompt
- Fix prompt variable name
- Use openhermes-2.5-mistral-7b.Q4_K_M.gguf
- Fix the delete issue of LlamaCppModel; disable verbose log; update diskcache; remove conda-pack
- Update Chinese prompt and process the model response
- Extract result from varying tags
- Add back missing content_matches and put tags regex into variable
- Update English prompt and decide CANNOT_ANSWER based on language configuration
- Add 3 new model entries and upgrade bitsandbytes
- Add a few chat templates
- Update model conversation with fastchat code
- Revert "Update model conversation with fastchat code" (reverts commit a5dc4b5)
- Revert "Add few chat templates" (reverts commit e6b6c99)
- Add OpenHermes-2.5-Mistral-7B chat template
- Fix missing messages and offset in chat template
- Update fschat
- Remove model adapter debugging logs and add conversation template
- Update Chinese chat knowledge prompt
- Avoid saving the long chat history messages
- Update Chinese chat knowledge prompt
- Temporary workaround to make the GGUF file use a different chat template
- Use ADD_COLON_SINGLE instead of FALCON_CHAT for separator style
- Allow no model_name in chat completion request
- Use starling-lm-7b-alpha.Q5_K_M.gguf
- Add empty string as system for openchat_3.5 chat template
- Undo response regex in generate_streaming
- refactor: Refactor storage and new serve template (eosphoros-ai#947)
- feat(core): Add API authentication for serve template (eosphoros-ai#950)
- ci: Add python unit test workflows (eosphoros-ai#954)
- feat(model): Support Mixtral-8x7B (eosphoros-ai#959)
- feat(core): Support multi round conversation operator (eosphoros-ai#986)
- chore(build): Fix typo and new pre-commit config (eosphoros-ai#987)
- feat(model): Support SOLAR-10.7B-Instruct-v1.0 (eosphoros-ai#1001)
- refactor: RAG Refactor (eosphoros-ai#985)
- Upgrade English prompt for chat knowledge

Co-authored-by: Aralhi <xiaoping0501@gmail.com>
Co-authored-by: csunny <cfqsunny@163.com>
Here is the result of this workflow on my repository.